Exploration server: Health and musical practice

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

Internal identifier: 000F75 (Main/Exploration); previous: 000F74; next: 000F76

Authors: Clément François [Spain]; Daniele Schön [France]

Source:

RBID : pubmed:24035820

French descriptors

English descriptors

Abstract

There is increasing evidence that humans and other mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations.
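The segmentation mechanism the abstract refers to — learners exploiting conditional (transitional) probabilities between adjacent elements of a stream — can be sketched with a toy example. This is an illustrative sketch of the general statistic used in statistical-learning designs, not the authors' experimental procedure; the pseudo-words and stream length are invented for the demonstration:

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probabilities P(next | current) estimated
    from adjacent pairs in a sequence."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy "speech stream": three trisyllabic pseudo-words concatenated in
# random order, so word-internal transitions are predictable while
# word-boundary transitions are not.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
stream = []
for _ in range(200):
    stream.extend(random.choice(words))

tp = transitional_probabilities(stream)
# Within-word transition: "tu" is always followed by "pi".
print(tp[("tu", "pi")])  # 1.0
# Word-boundary transitions out of "ro" are split across the three
# possible word onsets (roughly 1/3 each): the dip in transitional
# probability is the cue a statistical learner can use to segment.
print({b: round(p, 2) for (a, b), p in tp.items() if a == "ro"})
```

Sequences whose adjacent-pair probabilities dip below the word-internal level mark candidate boundaries; the same computation applies to the non-speech tone streams mentioned above.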

DOI: 10.1016/j.heares.2013.08.018
PubMed: 24035820


Affiliations:


Links to previous steps (curation, corpus, ...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.</title>
<author>
<name sortKey="Francois, Clement" sort="Francois, Clement" uniqKey="Francois C" first="Clément" last="François">Clément François</name>
<affiliation wicri:level="2">
<nlm:affiliation>Cognition and Brain Plasticity Unit, Dept. of Basic Psychology (Campus de Bellvitge) &amp; IDIBELL, University of Barcelona, Feixa Llarga s/n, 08907 L'Hospitalet (Barcelona), Spain; Department of Basic Psychology, Faculty of Psychology, University of Barcelona, 08035 Barcelona, Spain.</nlm:affiliation>
<country xml:lang="fr">Espagne</country>
<wicri:regionArea>Cognition and Brain Plasticity Unit, Dept. of Basic Psychology (Campus de Bellvitge) &amp; IDIBELL, University of Barcelona, Feixa Llarga s/n, 08907 L'Hospitalet (Barcelona), Spain; Department of Basic Psychology, Faculty of Psychology, University of Barcelona, 08035 Barcelona</wicri:regionArea>
<placeName>
<region nuts="2" type="communauté">Catalogne</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Schon, Daniele" sort="Schon, Daniele" uniqKey="Schon D" first="Daniele" last="Schön">Daniele Schön</name>
<affiliation wicri:level="4">
<nlm:affiliation>Aix-Marseille Université, INS, Marseille, France; INSERM, U1106, Marseille, France. Electronic address: daniele.schon@univ-amu.fr.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>Aix-Marseille Université, INS, Marseille, France; INSERM, U1106, Marseille</wicri:regionArea>
<placeName>
<region type="region">Provence-Alpes-Côte d'Azur</region>
<region type="old region">Provence-Alpes-Côte d'Azur</region>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université d'Aix-Marseille</orgName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PubMed</idno>
<date when="2014">2014</date>
<idno type="RBID">pubmed:24035820</idno>
<idno type="pmid">24035820</idno>
<idno type="doi">10.1016/j.heares.2013.08.018</idno>
<idno type="wicri:Area/Main/Corpus">001119</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Corpus" wicri:corpus="PubMed">001119</idno>
<idno type="wicri:Area/Main/Curation">001119</idno>
<idno type="wicri:explorRef" wicri:stream="Main" wicri:step="Curation">001119</idno>
<idno type="wicri:Area/Main/Exploration">001119</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.</title>
<author>
<name sortKey="Francois, Clement" sort="Francois, Clement" uniqKey="Francois C" first="Clément" last="François">Clément François</name>
<affiliation wicri:level="2">
<nlm:affiliation>Cognition and Brain Plasticity Unit, Dept. of Basic Psychology (Campus de Bellvitge) &amp; IDIBELL, University of Barcelona, Feixa Llarga s/n, 08907 L'Hospitalet (Barcelona), Spain; Department of Basic Psychology, Faculty of Psychology, University of Barcelona, 08035 Barcelona, Spain.</nlm:affiliation>
<country xml:lang="fr">Espagne</country>
<wicri:regionArea>Cognition and Brain Plasticity Unit, Dept. of Basic Psychology (Campus de Bellvitge) &amp; IDIBELL, University of Barcelona, Feixa Llarga s/n, 08907 L'Hospitalet (Barcelona), Spain; Department of Basic Psychology, Faculty of Psychology, University of Barcelona, 08035 Barcelona</wicri:regionArea>
<placeName>
<region nuts="2" type="communauté">Catalogne</region>
</placeName>
</affiliation>
</author>
<author>
<name sortKey="Schon, Daniele" sort="Schon, Daniele" uniqKey="Schon D" first="Daniele" last="Schön">Daniele Schön</name>
<affiliation wicri:level="4">
<nlm:affiliation>Aix-Marseille Université, INS, Marseille, France; INSERM, U1106, Marseille, France. Electronic address: daniele.schon@univ-amu.fr.</nlm:affiliation>
<country xml:lang="fr">France</country>
<wicri:regionArea>Aix-Marseille Université, INS, Marseille, France; INSERM, U1106, Marseille</wicri:regionArea>
<placeName>
<region type="region">Provence-Alpes-Côte d'Azur</region>
<region type="old region">Provence-Alpes-Côte d'Azur</region>
<settlement type="city">Marseille</settlement>
</placeName>
<orgName type="university">Université d'Aix-Marseille</orgName>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Hearing research</title>
<idno type="eISSN">1878-5891</idno>
<imprint>
<date when="2014" type="published">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Acoustic Stimulation (MeSH)</term>
<term>Animals (MeSH)</term>
<term>Auditory Pathways (physiology)</term>
<term>Auditory Perception (physiology)</term>
<term>Brain Mapping (MeSH)</term>
<term>Brain Stem (physiology)</term>
<term>Electroencephalography (MeSH)</term>
<term>Hearing (physiology)</term>
<term>Humans (MeSH)</term>
<term>Language (MeSH)</term>
<term>Learning (MeSH)</term>
<term>Models, Neurological (MeSH)</term>
<term>Music (MeSH)</term>
<term>Neurons (physiology)</term>
<term>Sound (MeSH)</term>
<term>Speech (MeSH)</term>
<term>Speech Perception (physiology)</term>
</keywords>
<keywords scheme="KwdFr" xml:lang="fr">
<term>Animaux (MeSH)</term>
<term>Apprentissage (MeSH)</term>
<term>Cartographie cérébrale (MeSH)</term>
<term>Humains (MeSH)</term>
<term>Langage (MeSH)</term>
<term>Modèles neurologiques (MeSH)</term>
<term>Musique (MeSH)</term>
<term>Neurones (physiologie)</term>
<term>Ouïe (physiologie)</term>
<term>Parole (MeSH)</term>
<term>Perception auditive (physiologie)</term>
<term>Perception de la parole (physiologie)</term>
<term>Son (physique) (MeSH)</term>
<term>Stimulation acoustique (MeSH)</term>
<term>Tronc cérébral (physiologie)</term>
<term>Voies auditives (physiologie)</term>
<term>Électroencéphalographie (MeSH)</term>
</keywords>
<keywords scheme="MESH" qualifier="physiologie" xml:lang="fr">
<term>Neurones</term>
<term>Ouïe</term>
<term>Perception auditive</term>
<term>Perception de la parole</term>
<term>Tronc cérébral</term>
<term>Voies auditives</term>
</keywords>
<keywords scheme="MESH" qualifier="physiology" xml:lang="en">
<term>Auditory Pathways</term>
<term>Auditory Perception</term>
<term>Brain Stem</term>
<term>Hearing</term>
<term>Neurons</term>
<term>Speech Perception</term>
</keywords>
<keywords scheme="MESH" xml:lang="en">
<term>Acoustic Stimulation</term>
<term>Animals</term>
<term>Brain Mapping</term>
<term>Electroencephalography</term>
<term>Humans</term>
<term>Language</term>
<term>Learning</term>
<term>Models, Neurological</term>
<term>Music</term>
<term>Sound</term>
<term>Speech</term>
</keywords>
<keywords scheme="MESH" xml:lang="fr">
<term>Animaux</term>
<term>Apprentissage</term>
<term>Cartographie cérébrale</term>
<term>Humains</term>
<term>Langage</term>
<term>Modèles neurologiques</term>
<term>Musique</term>
<term>Parole</term>
<term>Son (physique)</term>
<term>Stimulation acoustique</term>
<term>Électroencéphalographie</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations.</div>
</front>
</TEI>
<pubmed>
<MedlineCitation Status="MEDLINE" Owner="NLM">
<PMID Version="1">24035820</PMID>
<DateCompleted>
<Year>2014</Year>
<Month>09</Month>
<Day>10</Day>
</DateCompleted>
<DateRevised>
<Year>2014</Year>
<Month>01</Month>
<Day>14</Day>
</DateRevised>
<Article PubModel="Print-Electronic">
<Journal>
<ISSN IssnType="Electronic">1878-5891</ISSN>
<JournalIssue CitedMedium="Internet">
<Volume>308</Volume>
<PubDate>
<Year>2014</Year>
<Month>Feb</Month>
</PubDate>
</JournalIssue>
<Title>Hearing research</Title>
<ISOAbbreviation>Hear Res</ISOAbbreviation>
</Journal>
<ArticleTitle>Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.</ArticleTitle>
<Pagination>
<MedlinePgn>122-8</MedlinePgn>
</Pagination>
<ELocationID EIdType="doi" ValidYN="Y">10.1016/j.heares.2013.08.018</ELocationID>
<ELocationID EIdType="pii" ValidYN="Y">S0378-5955(13)00216-5</ELocationID>
<Abstract>
<AbstractText>There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations.</AbstractText>
<CopyrightInformation>Copyright © 2013 Elsevier B.V. All rights reserved.</CopyrightInformation>
</Abstract>
<AuthorList CompleteYN="Y">
<Author ValidYN="Y">
<LastName>François</LastName>
<ForeName>Clément</ForeName>
<Initials>C</Initials>
<AffiliationInfo>
<Affiliation>Cognition and Brain Plasticity Unit, Dept. of Basic Psychology (Campus de Bellvitge) &amp; IDIBELL, University of Barcelona, Feixa Llarga s/n, 08907 L'Hospitalet (Barcelona), Spain; Department of Basic Psychology, Faculty of Psychology, University of Barcelona, 08035 Barcelona, Spain.</Affiliation>
</AffiliationInfo>
</Author>
<Author ValidYN="Y">
<LastName>Schön</LastName>
<ForeName>Daniele</ForeName>
<Initials>D</Initials>
<AffiliationInfo>
<Affiliation>Aix-Marseille Université, INS, Marseille, France; INSERM, U1106, Marseille, France. Electronic address: daniele.schon@univ-amu.fr.</Affiliation>
</AffiliationInfo>
</Author>
</AuthorList>
<Language>eng</Language>
<PublicationTypeList>
<PublicationType UI="D016428">Journal Article</PublicationType>
<PublicationType UI="D016454">Review</PublicationType>
</PublicationTypeList>
<ArticleDate DateType="Electronic">
<Year>2013</Year>
<Month>09</Month>
<Day>12</Day>
</ArticleDate>
</Article>
<MedlineJournalInfo>
<Country>Netherlands</Country>
<MedlineTA>Hear Res</MedlineTA>
<NlmUniqueID>7900445</NlmUniqueID>
<ISSNLinking>0378-5955</ISSNLinking>
</MedlineJournalInfo>
<CitationSubset>IM</CitationSubset>
<MeshHeadingList>
<MeshHeading>
<DescriptorName UI="D000161" MajorTopicYN="N">Acoustic Stimulation</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D000818" MajorTopicYN="N">Animals</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001306" MajorTopicYN="N">Auditory Pathways</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="N">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001307" MajorTopicYN="N">Auditory Perception</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001931" MajorTopicYN="N">Brain Mapping</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D001933" MajorTopicYN="N">Brain Stem</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="N">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D004569" MajorTopicYN="N">Electroencephalography</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D006309" MajorTopicYN="N">Hearing</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D006801" MajorTopicYN="N">Humans</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D007802" MajorTopicYN="N">Language</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D007858" MajorTopicYN="Y">Learning</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D008959" MajorTopicYN="N">Models, Neurological</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009146" MajorTopicYN="Y">Music</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D009474" MajorTopicYN="N">Neurons</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="N">physiology</QualifierName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D013016" MajorTopicYN="N">Sound</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D013060" MajorTopicYN="N">Speech</DescriptorName>
</MeshHeading>
<MeshHeading>
<DescriptorName UI="D013067" MajorTopicYN="N">Speech Perception</DescriptorName>
<QualifierName UI="Q000502" MajorTopicYN="Y">physiology</QualifierName>
</MeshHeading>
</MeshHeadingList>
</MedlineCitation>
<PubmedData>
<History>
<PubMedPubDate PubStatus="received">
<Year>2013</Year>
<Month>02</Month>
<Day>27</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="revised">
<Year>2013</Year>
<Month>08</Month>
<Day>21</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="accepted">
<Year>2013</Year>
<Month>08</Month>
<Day>26</Day>
</PubMedPubDate>
<PubMedPubDate PubStatus="entrez">
<Year>2013</Year>
<Month>9</Month>
<Day>17</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="pubmed">
<Year>2013</Year>
<Month>9</Month>
<Day>17</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
<PubMedPubDate PubStatus="medline">
<Year>2014</Year>
<Month>9</Month>
<Day>11</Day>
<Hour>6</Hour>
<Minute>0</Minute>
</PubMedPubDate>
</History>
<PublicationStatus>ppublish</PublicationStatus>
<ArticleIdList>
<ArticleId IdType="pubmed">24035820</ArticleId>
<ArticleId IdType="pii">S0378-5955(13)00216-5</ArticleId>
<ArticleId IdType="doi">10.1016/j.heares.2013.08.018</ArticleId>
</ArticleIdList>
</PubmedData>
</pubmed>
<affiliations>
<list>
<country>
<li>Espagne</li>
<li>France</li>
</country>
<region>
<li>Catalogne</li>
<li>Provence-Alpes-Côte d'Azur</li>
</region>
<settlement>
<li>Marseille</li>
</settlement>
<orgName>
<li>Université d'Aix-Marseille</li>
</orgName>
</list>
<tree>
<country name="Espagne">
<region name="Catalogne">
<name sortKey="Francois, Clement" sort="Francois, Clement" uniqKey="Francois C" first="Clément" last="François">Clément François</name>
</region>
</country>
<country name="France">
<region name="Provence-Alpes-Côte d'Azur">
<name sortKey="Schon, Daniele" sort="Schon, Daniele" uniqKey="Schon D" first="Daniele" last="Schön">Daniele Schön</name>
</region>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Sante/explor/SanteMusiqueV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000F75 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000F75 | SxmlIndent | more

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Sante
   |area=    SanteMusiqueV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     pubmed:24035820
   |texte=   Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Main/Exploration/RBID.i   -Sk "pubmed:24035820" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd   \
       | NlmPubMed2Wicri -a SanteMusiqueV1 

Wicri

This area was generated with Dilib version V0.6.38.
Data generation: Mon Mar 8 15:23:44 2021. Site generation: Mon Mar 8 15:23:58 2021